Top related persons:
Top related locs:
Top related orgs:

Search resuls for: "Jack Clark"


13 mentions found


Dario Amodei, Daniela Amodei, Tom Brown, Jack Clark, Jared Kaplan, and Sam McCandlish, cofounders of AnthropicAnthropic's Dario Amodei, Jack Clark, and Daniela Amodei. Since then, the company has received billions in funding from both Google and Amazon in what some have termed an "AI arms race." CEO Dario Amodei, a former Google Brain researcher with a Ph.D. in computational neuroscience, has been writing about the cataclysmic potential of AI since 2016. Constitutional AI is partly the brainchild of two other OpenAI alums and Anthropic cofounders, Tom Brown and Jared Kaplan. Both Kaplan and Brown have worked on Anthropic's efforts to "red team" the company's flagship language model, Claude, probing for misuse possibilities.
Persons: Dario Amodei, Daniela Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, Anthropic Anthropic's Dario Amodei, Menlo Ventures Anthropic, Amodei's, Anthropic, , Anthropic cofounders, Brown, Kaplan, Johns Hopkins, Claude, AGI, I'm Organizations: Google, Menlo Ventures, Bloomberg, Johns, OpenAI Locations: OpenAI, GPT
Now, frontier AI has become the latest buzzword as concerns grow that the emerging technology has capabilities that could endanger humanity. The debate comes to a head Wednesday, when British Prime Minister Rishi Sunak hosts a two-day summit focused on frontier AI. In a speech last week, Sunak said only governments — not AI companies — can keep people safe from the technology’s risks. Frontier AI is shorthand for the latest and most powerful systems that go right up to the edge of AI’s capabilities. That makes frontier AI systems “dangerous because they’re not perfectly knowledgeable,” Clune said.
Persons: , Rishi Sunak, It’s, Kamala Harris, Ursula von der Leyen, Google’s, Alan Turing, Sunak, , Jeff Clune, Clune, Elon, Sam Altman, He’s, Joe Biden, Geoffrey Hinton, Yoshua, ” Clune, , it's, Francine Bennett, Ada Lovelace, Deb Raji, ” Raji, it’s, shouldn’t, Raji, DeepMind, Anthropic, Dario Amodei, Jack Clark, , Carsten Jung, Jill Lawless Organizations: British, U.S, European, University of British, AI Safety, European Union, Clune, Ada, Ada Lovelace Institute, House, University of California, ” Tech, Microsoft, Institute for Public Policy Research, Regulators, Associated Press Locations: Bletchley, University of British Columbia, State, EU, Brussels, China, U.S, Beijing, London, Berkeley
Sen. Chuck Schumer is hosting sessions to help lawmakers understand and shape future rules on AI. AdvertisementAdvertisementMeredith Whittaker, president of messaging app Signal and former director of AI think tank the AI Now Institute, posted on X: "This is the room you pull together when your staffers want pictures with tech industry AI celebrities. It's not the room you'd assemble when you want to better understand what AI is, how (and for whom) it functions, and what to do about it." This is the room you pull together when your staffers want pictures with tech industry AI celebrities. It's not the room you'd assemble when you want to better understand what AI is, how (and for whom) it functions, and what to do about it.
Persons: Sen, Chuck Schumer, Axios, Mark Zuckerberg —, Sam Altman, Sundar Pichai, Satya Nadella, Elon Musk, Mark Zuckerberg, Per, Maria Curi, Rumman Chowdhury, Deborah Raji, Tristan Harris, b9mJhW39NW — Maria Cristina Curi, OpenAI, Alex Karp, Jack Clark, Clement Delangue, Meredith Whittaker, It's, UM1EhFNb1H — Meredith Whittaker, Face's Delangue Organizations: Morning, Google, Microsoft, Mozilla, Center of Humane, Meta, Nvidia Locations: Washington
The UN has an opportunity to set globally agreed-upon rules of the road for monitoring and regulating AI, Guterres said Tuesday at a first-ever meeting of the UN Security Council devoted to AI governance. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead. “China firmly opposes these behaviors.”Zhang’s remarks come on the heels of reports that the US government may seek to limit the flow of powerful artificial intelligence chips to China. Addressing the security council via teleconference, Jack Clark, the co-founder of the AI company Anthropic, urged member states not to allow private companies to dominate the development of artificial intelligence. “We cannot leave the development of artificial intelligence solely to private sector actors,” Clark said.
Persons: António Guterres, Guterres, , James, Zhang Jun, ” Zhang, Zhang’s, Jack Clark, ” Clark Organizations: CNN, United, UN, Council, , Tech Locations: United Nations, China, United States, teleconference
[1/5] The U.N. Security Council holds a meeting on Artificial intelligence at U.N. headquarters in New York City, U.S., July 18, 2023. "Both military and non-military applications of AI could have very serious consequences for global peace and security," Guterres said. Ambassador Zhang Jun described AI as a "double-edged sword" and said Beijing supports a central coordinating role of the U.N. on establishing guiding principles for AI. "No member states should use AI to censor, constrain, repress or disempower people," he told the council. Russia questioned whether the council, which is charged with maintaining international peace and security, should be discussing AI.
Persons: Brendan McDermid, James, Britain's, Antonio Guterres, Jack Clark, Zeng Yi, Guterres, U.N, Zhang Jun, Zhang, Jeffrey DeLaurentis, Dmitry Polyanskiy, Michelle Nichols, Aurora Ellis Organizations: . Security, REUTERS, Brendan McDermid UNITED NATIONS, United Nations Security Council, Britain's, U.N, China -, Research Center, AI, International Atomic Energy Agency, International Civil Aviation Organization, U.S, Thomson Locations: New York City, U.S, China, United States, Beijing, Russia
Training AI models in data centers uses up to three times more energy than traditional cloud tasks. A warning from a Microsoft data center veteranA Microsoft data center. MicrosoftTom Keane, who oversaw Microsoft's cloud data centers for about two decades, recently warned about this. An AI data center will need up to three times more power than a traditional cloud facility, he estimated. "The data center of the future is not in Virginia, it's not in Santa Clara, it's not in Dallas, Texas," Ganzi said.
Persons: Marc Ganzi, Cowen, Nammo, TikTok, Jack Clark, Matthew Barakat, Shaolei Ren, Microsoft Tom Keane, Keane, Bernstein, Mark Moerdler, DigitalBridge, Ganzi, it's, Ellen Thomas Organizations: Dominion Energy, Amazon, Microsoft, Google, McKinsey, Big Tech, Financial Times, AP, Nvidia, UC Riverside Locations: Northern Virginia, Manassas , Virginia, Virginia, DataBank, Santa Clara, Dallas , Texas
Nvidia GPUs are essential for training the big models behind ChatGPT and other generative AI tools. That gives big tech companies an advantage over smaller startups. Nvidia GPUs are the water that feeds today's flourishing AI ecosystem. This is giving Big Tech companies another huge advantage over smaller upstarts. Many startups can't afford that, or they just can't get hold of the chips.
Persons: TSMC, OpenAI, Bing, Nat Friedman, Daniel Gross, That's, Friedman, Gross, Jack Clark Organizations: Nvidia, Big Tech, Microsoft, Google
Interviews with a U.S. senator, congressional staffers, AI companies and interest groups show there are a number of options under discussion. Some proposals focus on AI that may put people's lives or livelihoods at risk, like in medicine and finance. Other possibilities include rules to ensure AI isn't used to discriminate or violate someone's civil rights. Another debate is whether to regulate the developer of AI or the company that uses it to interact with consumers. GOVERNMENT MICROMANAGEMENTThe risk-based approach means AI used to diagnose cancer, for example, would be scrutinized by the Food and Drug Administration, while AI for entertainment would not be regulated.
Don Denkinger was regarded as one of the finest major-league umpires of his time. Working in the American League from 1969 to 1998, he was assigned to four World Series and three All-Star Games. But when Denkinger died Friday in Waterloo, Iowa, at 86, he was remembered mostly for his famously botched call on baseball’s greatest stage. In 1985, Denkinger was umpiring at first base in Game 6 of the World Series between the St. Louis Cardinals and the Kansas City Royals. The Royals’ Jorge Orta, who led off, hit a bounder to the Cardinals’ first baseman, Jack Clark.
Mass event will let hackers test limits of A.I. technology
  + stars: | 2023-05-10 | by ( ) www.cnbc.com   time to read: +6 min
But now its maker, OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology. Some are official "red teams" authorized by the companies to "prompt attack" the AI models to discover their vulnerabilities. Chowdhury, now the co-founder of AI accountability nonprofit Humane Intelligence, said it's not just about finding flaws but about figuring out ways to fix them. Building the platform for the testing is another startup called Scale AI, known for its work in assigning humans to help train AI models by labeling data. "Our basic view is that AI systems will need third-party assessments, both before deployment and after deployment.
The moral values guidelines, which Anthropic calls Claude's constitution, draw from several sources, including the United Nations Declaration on Human Rights and even Apple Inc's (AAPL.O) data privacy rules. Anthropic was founded by former executives from Microsoft Corp-backed (MSFT.O) OpenAI to focus on creating safe AI systems that will not, for example, tell users how to build a weapon or use racially biased language. Co-founder Dario Amodei was one of several AI executives who met with Biden last week to discuss potential dangers of AI. Anthropic takes a different approach, giving its Open AI competitor Claude a set of written moral values to read and learn from as it makes decisions on how to respond to questions. "In a few months, I predict that politicians will be quite focused on what the values are of different AI systems, and approaches like constitutional AI will help with that discussion because we can just write down the values," Clark said.
Amodei chatted with Insider about her approach to trust and safety and what the future holds for AI. However, the majority of Anthropic cofounder and president Daniela Amodei's career has been spent trying to prove the opposite: that trust and safety is a feature, not a bug. "It's an organizational structure question, but it's also a mindset question," she told Insider. In 2020, Amodei and six other OpenAI employees, including her brother Dario Amodei, left the company to start rival AI lab Anthropic. Throughout Anthropic's growth, the company has kept an interdisciplinary culture, with employees whose experiences range from physics to computational biology to policywriting, Amodei told Insider.
Jack Clark was a Bloomberg tech journalist in 2015 when he came across OpenAI for the first time. He was so inspired that he quit his job and dove into the world of AI, later cofounding Anthropic. Now, he writes Import AI, a weekly AI-focused newsletter that reaches over 34,000 subscribers. A "weird" newsletterA weekly newsletter, Import AI features detailed analyses on AI research papers, Clark's thoughts on current events, and AI-focused short fiction stories. He estimates that he's read around 4,000 research papers while writing Import AI — and more importantly, he jokes, spent over $6,000 in lattes due to his persistent habit of drinking multiple caffeinated beverages while writing each week.
Total: 13